32 research outputs found

    Tactile Guidance for Policy Adaptation


    A Survey of Tactile Human-Robot Interactions

    Robots come into physical contact with humans in both experimental and operational settings. Many potential factors motivate the detection of human contact, ranging from safe robot operation around humans to robot behaviors that depend on human guidance. This article presents a review of current research within the field of Tactile Human–Robot Interactions (Tactile HRI), where physical contact from a human is detected by a robot during the execution or development of robot behaviors. Approaches are presented from two viewpoints: the types of physical interactions that occur between the human and robot, and the types of sensors used to detect these interactions. We contribute a structure for the categorization of Tactile HRI research within each viewpoint. Tactile sensing techniques are grouped into three categories according to what covers the sensors: (i) a hard shell, (ii) a flexible substrate, or (iii) no covering. Three categories of physical HRI are likewise identified, consisting of contact that (i) interferes with robot behavior execution, (ii) contributes to behavior execution, and (iii) contributes to behavior development. We populate each category with the current literature, and furthermore identify the state of the art within categories and promising areas for future research.

    Assessing Interaction Dynamics in the Context of Robot Programming by Demonstration

    In this paper we focus on the particularities of human-robot interaction that occur during programming by demonstration. Understanding what makes the interaction rewarding and keeps the user engaged helps optimize the robot's learning. Two user studies are presented. The first validates facially displayed expressions on the iCub robot. The best-recognized displays are then used in a second study, along with other ways of providing feedback while teaching a manipulation task to a robot. We determine the preferred and most effective way of providing feedback in relation to the robot's tactile sensing, in order to improve the teaching interaction and to keep the users engaged throughout.

    Tactile Correction and Multiple Training Data Sources for Robot Motion Control

    This work considers our approach to learning robot motion control from the standpoint of multiple data sources. Our paradigm derives data from human teachers who provide task demonstrations and tactile corrections for policy refinement and reuse. We contribute a novel formalization for this data and identify future directions for the algorithm to reason explicitly about differences in data source.

    A Human-Robot Interaction Perspective on Assistive and Rehabilitation Robotics

    Assistive and rehabilitation devices are a promising and challenging field of recent robotics research. Motivated by societal needs such as aging populations, such devices can support motor functionality and subject training. The design, control, sensing, and assessment of these devices become more sophisticated with a human in the loop. This paper gives a human–robot interaction perspective on current issues and opportunities in the field. On the topic of control and machine learning, approaches that support but do not distract subjects are reviewed. Options for providing sensory feedback to the user, currently missing from robotic devices, are outlined. Parallels between device acceptance and affective computing are drawn. Furthermore, requirements for functional assessment protocols that relate to real-world tasks are discussed. In all topic areas, the design of human-oriented frameworks and methods is dominated by challenges arising from the close interaction between the human and the robotic device. This paper discusses these aspects in order to open up new perspectives for future robotic solutions.

    Continuing Robot Skill Learning after Demonstration with Human Feedback

    Though demonstration-based approaches have been successfully applied to learning a variety of robot behaviors, some limitations do exist. The ability to continue learning after demonstration, based on execution experience with the learned policy, has therefore proven to be an asset to many demonstration-based learning systems. This paper discusses important considerations for interfaces that provide feedback to adapt and improve demonstrated behaviors. Feedback interfaces developed for two robots with very different motion capabilities - a wheeled mobile robot and a high degree-of-freedom humanoid - are highlighted.